
Search in the Catalogues and Directories

Page 1 of 3
Hits 1–20 of 58

1
A Latent-Variable Model for Intrinsic Probing ...
BASE
2
Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality ...
BASE
3
ANLIzing the Adversarial Natural Language Inference Dataset
In: Proceedings of the Society for Computation in Linguistics (2022)
BASE
4
Investigating Failures of Automatic Translation in the Case of Unambiguous Gender ...
BASE
5
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs ...
BASE
6
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs ...
BASE
7
Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network ...
BASE
8
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
In: Transactions of the Association for Computational Linguistics, 9 (2021)
BASE
9
UnNatural Language Inference ...
BASE
10
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little ...
BASE
11
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs ...
BASE
12
SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ...
BASE
13
Intrinsic Probing through Dimension Selection ...
BASE
14
Information-Theoretic Probing for Linguistic Structure ...
BASE
15
Information-Theoretic Probing for Linguistic Structure ...
BASE
16
Intrinsic Probing through Dimension Selection ...
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Abstract: Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks. Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it. In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted. To enable intrinsic probing, we propose a novel framework based on a decomposable multivariate Gaussian probe that allows us to determine whether the linguistic information in word embeddings is dispersed or focal. We then probe fastText and BERT for various morphosyntactic attributes across 36 languages. We find that most attributes are reliably encoded by only a few neurons, with fastText ...
URL: http://hdl.handle.net/20.500.11850/462314
https://dx.doi.org/10.3929/ethz-b-000462314
BASE
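Note: The abstract above describes intrinsic probing as asking which individual embedding dimensions encode a morphosyntactic attribute, using a decomposable multivariate Gaussian probe. The snippet below is a minimal, hypothetical sketch of that general idea only, not the paper's actual method or data: greedy forward selection of dimensions under a diagonal-Gaussian class-conditional probe, run on synthetic embeddings.

```python
# Toy sketch: which embedding dimensions encode an attribute?
# Greedy forward selection under a diagonal-Gaussian class-conditional probe.
# All names and the synthetic data are illustrative assumptions.
import numpy as np

def fit_gaussian_probe(X, y, dims):
    """Fit one diagonal Gaussian (mean, variance, prior) per class over the selected dims."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c][:, dims]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, np.mean(y == c))
    return params

def predict(params, X, dims):
    """Choose the class with the highest Gaussian log-likelihood plus log prior."""
    Xs = X[:, dims]
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        ll = -0.5 * (((Xs - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
        scores.append(ll + np.log(prior))
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]

def greedy_dimension_selection(X_tr, y_tr, X_dev, y_dev, k=5):
    """Forward-select the k dimensions that most improve held-out accuracy."""
    chosen = []
    for _ in range(k):
        best_dim, best_acc = None, -1.0
        for d in range(X_tr.shape[1]):
            if d in chosen:
                continue
            dims = chosen + [d]
            probe = fit_gaussian_probe(X_tr, y_tr, dims)
            acc = (predict(probe, X_dev, dims) == y_dev).mean()
            if acc > best_acc:
                best_dim, best_acc = d, acc
        chosen.append(best_dim)
        print(f"selected dim {best_dim}, dev accuracy {best_acc:.3f}")
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "embeddings": only dimensions 3 and 17 carry the attribute.
    X = rng.normal(size=(2000, 32))
    y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)
    greedy_dimension_selection(X[:1500], y[:1500], X[1500:], y[1500:], k=3)
```

If the attribute is focal, accuracy saturates after the first one or two selected dimensions; if it is dispersed, accuracy keeps climbing as more dimensions are added.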
17
Predicting Declension Class from Form and Meaning ...
BASE
18
Predicting declension class from form and meaning
BASE
19
Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
20
Pareto Probing: Trading Off Accuracy for Complexity
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 58